
    How LTE-M and NB-IoT Are Revolutionizing Asset Tracking in Global Supply Chains

In the past, tracking a shipping container across continents or monitoring the temperature of a pharmaceutical package in a rural warehouse came with trade-offs: cost, power drain, or unreliable coverage. Asset visibility was reserved for high-value goods, while the rest of the supply chain operated on estimates, paper trails, and phone calls. This is changing. Two cellular technologies—LTE-M and NB-IoT—are now reshaping long-distance asset tracking. Designed specifically for low-power, wide-area connectivity, they are not flashy or fast, but they are practical. And that practicality is unlocking a new standard of visibility across logistics networks.

A Functional Divide: What Makes LTE-M and NB-IoT Different?

Both LTE-M (Long Term Evolution for Machines) and NB-IoT (Narrowband Internet of Things) were developed under the 3GPP standard. They're not general-purpose wireless technologies. They're built for a narrow job: to allow simple devices to send small data packets across long distances with minimal power usage. Still, they serve different purposes.

LTE-M supports voice (VoLTE), real-time mobility, and bandwidth up to 1.4 MHz. That makes it suited for assets in motion—trucks, railcars, shipping containers. Devices stay connected as they cross cell towers, even at highway speeds. NB-IoT, on the other hand, is optimized for stationary or slow-moving assets. It operates on just 180 kHz of bandwidth, uses even less power than LTE-M, and excels at indoor or underground penetration. That makes it ideal for warehouse environments, shipping pallets, or cold storage units. In both cases, devices can last up to 10 years on a battery, waking only to transmit data at defined intervals.

On the Ground: Where These Technologies Are Already Working

In Germany, Deutsche Telekom uses NB-IoT to track reusable transport packaging—low-value but often misplaced. By tagging these items with NB-IoT sensors, they can recover more assets and reduce losses. In the United States, Roambee uses LTE-M to track pharmaceutical shipments. Their sensors capture not just GPS data, but also temperature, humidity, and light exposure—essential data for compliance and quality control. Meanwhile, Sierra Wireless (now part of Semtech) offers LTE-M modules built into trackers used across North American freight networks. They enable cross-border asset visibility without requiring complicated roaming workarounds. These are operational deployments—quietly streamlining real supply chains today.

From Fragmented Data to Structured Insight

For decades, asset tracking systems lived in silos—fleet telematics in one system, warehouse sensors in another, handheld barcode scans in a third. LTE-M and NB-IoT allow these devices to transmit consistent, time-stamped data that can be ingested directly into ERP, WMS, or TMS platforms. This leads to several operational changes: Shipment ETAs can now be calculated using real-time location data. Route deviations, temperature excursions, or tampering events can trigger alerts instantly. Idle asset time, loss rates, and turn rates can be tracked quantitatively rather than through assumptions. These technologies give the supply chain a memory. Not just precise locations—but how long they've been there, under what conditions, and whether any anomalies occurred.
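To make that concrete, the following is a minimal sketch of how a single time-stamped tracker payload might be screened for a temperature excursion or tampering before being handed to an ERP, WMS, or TMS integration. The field names, thresholds, and alert rules are illustrative assumptions, not any particular vendor's payload format.

```python
from datetime import datetime, timezone

# Illustrative only: field names, thresholds, and alert rules are assumptions,
# not any specific LTE-M/NB-IoT tracker vendor's format.
TEMP_MAX_C = 8.0      # hypothetical cold-chain ceiling for a pharma shipment
LIGHT_MAX_LUX = 50.0  # crude proxy for a container or carton being opened

def check_reading(reading: dict) -> list:
    """Return alert messages for one time-stamped tracker payload."""
    alerts = []
    if reading["temperature_c"] > TEMP_MAX_C:
        alerts.append(
            f"{reading['asset_id']}: temperature excursion "
            f"({reading['temperature_c']} C) at {reading['timestamp']}"
        )
    if reading.get("light_lux", 0.0) > LIGHT_MAX_LUX:
        alerts.append(f"{reading['asset_id']}: possible tampering (light detected)")
    return alerts

reading = {
    "asset_id": "PALLET-0042",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "temperature_c": 9.3,
    "light_lux": 2.0,
}
for alert in check_reading(reading):
    print(alert)
```

In practice the same check would run on a stream of payloads arriving at whatever reporting interval the device's power budget allows.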
Strategic Implications: More Than Better Tracking

The shift to LTE-M and NB-IoT is not just about technical improvement. It represents a change in how companies define what is worth tracking.

1. Lowering the Threshold

When connectivity was expensive and battery life short, only high-value goods justified GPS trackers. LPWA (low-power wide-area) technology brings that threshold down. Companies can now afford to track plastic pallets, returnable containers, or temperature-sensitive packaging. These items were once considered disposable or unmonitored; now, they are part of the data flow.

2. Preparing for the 2G/3G Sunset

Many tracking systems still rely on 2G or 3G cellular modules. As networks phase out legacy services, companies are being forced to choose a replacement. LTE-M and NB-IoT offer a forward-compatible path that avoids the higher costs of full LTE or 5G broadband connections.

3. Shifting from Status to Intelligence

In the past, knowing an asset's last known location was sufficient. Today, organizations are using real-time sensor data to move from reactive to predictive operations. Cold chain breaches, delivery delays, or maintenance needs can be identified before they create service failures or lost revenue.

4. Expanding Global Logistics

LTE-M supports broader international roaming than NB-IoT, but both are becoming more accessible globally as carriers standardize their infrastructure. For multinational operations, this means fewer gaps and less complexity in deploying one global asset tracking framework.

Constraints and Deployment Realities

These technologies are not without limitations: Roaming for NB-IoT is still fragmented, limiting its use in cross-border applications. Latency for NB-IoT can be high, making it unsuitable for urgent alerts or rapid two-way communication. Device provisioning and firmware updates must be managed remotely, especially for assets deployed in hard-to-reach areas. As a result, organizations must carefully evaluate their device requirements, data latency tolerance, and regional coverage before choosing between LTE-M and NB-IoT; a rough illustration of that choice appears in the sketch after this article.

Supply chains are being redefined not by high-profile innovations, but by infrastructure technologies like LTE-M and NB-IoT that enable quiet, scalable change. These technologies do not replace the need for planning or coordination, but they reduce uncertainty. They give supply chain managers access to verified data instead of estimates. They allow decisions to be made with more context and fewer assumptions. In practical terms, LTE-M and NB-IoT make it feasible to track every pallet, every package, and every trailer—not just those deemed high-value. That's not a breakthrough; it is simply how resilient, modern supply chains operate today.

The post How LTE-M and NB-IoT Are Revolutionizing Asset Tracking in Global Supply Chains appeared first on Logistics Viewpoints.
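As referenced above, here is a rough rule-of-thumb selector reflecting the trade-offs described in the article: mobility, low latency, and cross-border roaming favor LTE-M, while stationary, deep-indoor, lowest-power assets favor NB-IoT. The parameters and decision rule are simplifications for illustration and are no substitute for a regional coverage and roaming assessment.

```python
# A rough rule-of-thumb selector reflecting the trade-offs discussed above.
# Parameter names and the decision rule are simplifications for illustration;
# real deployments also hinge on regional coverage and roaming agreements.
def suggest_radio(is_mobile: bool, needs_low_latency: bool,
                  deep_indoor: bool, crosses_borders: bool) -> str:
    if is_mobile or needs_low_latency or crosses_borders:
        return "LTE-M"    # handover support, lower latency, broader roaming
    if deep_indoor:
        return "NB-IoT"   # better penetration, lowest power draw
    return "either (decide on coverage, module cost, and battery target)"

print(suggest_radio(is_mobile=True, needs_low_latency=False,
                    deep_indoor=False, crosses_borders=True))   # LTE-M
print(suggest_radio(is_mobile=False, needs_low_latency=False,
                    deep_indoor=True, crosses_borders=False))   # NB-IoT
```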

    Turning sewage into power: The next big energy solution?

Reading Time: 4 minutes

When RMIT University's Professor Kalpit Shah began crafting a proposal for his Australian Research Council Fellowship in 2017, he didn't anticipate that it would lead to one of Australia's most promising waste-to-resource technologies. Today, his innovation – PYROCO – is on the verge of commercial rollout, offering a revolutionary way to treat biosolids and cut carbon emissions.

PYROCO is a technology that uses high heat without oxygen to turn sewage waste (biosolids) into biochar, a carbon-rich material. This biochar can then be used to produce bio-oils rich in phenols, offering a cleaner, cheaper alternative to petroleum-based chemicals used in the construction, electronics, and automotive industries. The innovation could significantly cut carbon emissions and support a more sustainable supply chain for essential industrial materials.

"PYROCO is designed to handle a variety of waste materials and renewable biomass feedstocks through a process called pyrolysis, which involves thermal decomposition," explains Shah, a distinguished chemical engineer. "In early 2017, I engaged in discussions with a local water utility in Melbourne, South East Water, which was interested in exploring biosolids pyrolysis as a backup plan for managing biosolids. When they learned about PYROCO, they expressed a strong interest in collaborating with us."

RMIT researchers inspect the operation of the PYROCO Mark-2 pilot unit at Melton Water Recycling Plant. (Source: Seamus Daniel, RMIT University)

After more than eight years of collaboration with them and, later, Intelligent Water Networks and Barwon Water, PYROCO has now reached a critical milestone: construction of the first commercial demonstration plant at one of South East Water's water recycling facilities.

And why should industries pay attention to this innovation? Because biochar can potentially play a key role in reducing the global carbon footprint. "According to the Australia New Zealand Biochar Industry Group (ANZBIG), this conversion from biowaste to biochar could reduce net emissions in Australia alone by 10-15%."

Biochar or bio-gold?

What sets PYROCO apart from other pyrolysis technologies is its unique reactor design. Traditional systems like rotary kilns or augers often require moving parts and heavy maintenance. PYROCO, on the other hand, has no moving parts, which leads to lower maintenance costs and better plant availability and lifespan. "Pyrolysis is an endothermic reaction, meaning it needs energy to occur. Good heat transfer is essential for this process," Prof Shah says. "PYROCO provides excellent heat transfer by turning a regular fluidised bed into a more effective heat exchanger device." Additionally, he says that PYROCO offers flexibility in managing seasonal variations in the quality of biosolids. "The biochar produced can be used in high-end applications, such as serving as a catalyst for converting biomass into phenolic oil, as recently demonstrated by our group in a renewable energy journal."

The technology, known as PYROCO, uses high temperatures without oxygen to convert treated sewage (biosolids) into a carbon-rich product called biochar, which can act as a catalyst to produce phenol-rich bio-oil. (Source: Seamus Daniel)

For his research, Shah, Deputy Director (Research) of the ARC Training Centre for the Transformation of Australia's Biosolids Resource, collaborated with the Indian Institute of Petroleum.
PYROCO's smart design also tackles one of the most serious environmental challenges: PFAS contamination. These toxic, cancer-linked chemicals are notoriously difficult to eliminate, but Shah says PYROCO achieves a 99.99% destruction rate. "PFAS present in biosolids can be vaporised during pyrolysis. After pyrolysis, thermal oxidisers are employed to combust the pyrolysis oil and gas vapours. These thermal oxidisers operate at sufficiently high temperatures to not only facilitate combustion but also destroy PFAS. PYROCO offers a pyrolysis reactor that maintains uniform temperature and provides a longer solids residence time, which allows for more efficient vaporisation of PFAS compared to other commercial reactors." In their trials, PFAS levels in all output streams were below detection limits, he claims.

(L-R) Dr Ganesh Veluswamy (RMIT researcher), Ross Weston (Engineering Manager, South East Water), Dr Aravind Surapaneni (Principal Scientist, SEW), Eamon Casey (Technical Director, Iota), Professor Kalpit Shah (Project Lead, RMIT), Dr Ibrahim Hakeem (RMIT researcher) and Dr Savankumar Patel (RMIT researcher) in front of the PYROCO technology. (Source: supplied)

Greener industrial future

Beyond emissions reductions, the carbon-rich biochar produced by PYROCO is opening doors in multiple industries. In construction, it can replace cement or act as an additive in road and bridge building. In electronics, it serves as a key material in battery electrodes. And in the automotive sector, it can help generate renewable fuels that offset fossil fuel use.

Currently, the team is working with international partners to offer PYROCO as a cost-effective solution in both developing and developed countries. "We have examined biosolids from developing countries like India and do not see any problems with using PYROCO there. In fact, when the quality of biosolids is lower, PYROCO can help meet energy needs effectively," he adds. For the commercial demonstration plant project, Aqua Metro has stepped in, leading both construction and operation.

Shah sees advanced pyrolysis as central to a cleaner future. "In ten years, I expect pyrolysis to play a major role in waste recycling, renewable energy production, and circular economy targets," he says. "PYROCO is just the beginning."

Read more: Ashak Nathwani: Defeating Mr CO2 and building carbon-conscious homes

The post Turning sewage into power: The next big energy solution? appeared first on Indian Link.
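As a back-of-the-envelope illustration of the destruction figure quoted above, a 99.99% destruction and removal efficiency corresponds to a 4-log reduction. The concentrations in this short calculation are invented for the example and are not measurements from the PYROCO trials.

```python
import math

# Back-of-the-envelope only: concentrations are invented, not trial data.
inlet_ug_per_kg = 50.0   # hypothetical PFAS load in the feed biosolids (ug/kg)
dre = 0.9999             # 99.99% destruction and removal efficiency

outlet_ug_per_kg = inlet_ug_per_kg * (1 - dre)
log_reduction = math.log10(inlet_ug_per_kg / outlet_ug_per_kg)

print(f"residual PFAS: {outlet_ug_per_kg:.4f} ug/kg")   # 0.0050
print(f"log reduction: {log_reduction:.1f}")            # 4.0
```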

    How Meta Thinks About Personalization and Privacy

Personalization, which tailors content based on user preference, has become widely used on virtually every social media platform. By providing users with relevant content that appeals to their unique interests, no two social media feeds are the same. Personalized social media posts can lead to a 50 percent increase in user engagement, as they resonate more deeply with individual preferences. This practice is so widespread that it has become a deep-seated expectation for users—74 percent of consumers feel frustrated when content isn't personalized. But as social media platforms integrate personalization technology, questions around privacy, transparency, and user choice are becoming increasingly pronounced.

Rob Sherman is Vice President and Deputy Chief Privacy Officer for Policy at Meta. He joined the company in 2012—at the time called Facebook—and has since worked diligently to protect user privacy while promoting innovation. Before joining Meta, Rob was a lawyer at Covington & Burling LLP, specializing in technology and media.

Below is a lightly edited and abridged transcript of our discussion. You can listen to this and other episodes of Explain to Shane on AEI.org and subscribe via your preferred listening platform. If you enjoyed this episode, leave us a review, and tell your friends and colleagues to tune in.

Shane Tews: Let's walk through how personalization works on social media apps, and how it can provide users with a better experience.

Rob Sherman: It's exciting that when I think about how my kids are growing up, they're growing up with so much choice and so much ability to get the experience that works for them. That's the same experience that I have now. So when I look at my Instagram feed, I am interested in travel. And so I get a lot of information about travel. I'm interested in things to do with my kids and I get information about that. I'm a vegetarian, so I get stuff about vegetarian recipes. These are the kinds of things that I see and choose to consume on Instagram.

The benefit of this is you can actually curate the experience that you want to have, and in general, your experience is going to be different than mine. You choose who you want to follow, and that gives a signal to our systems that this is the kind of content that Shane is interested in. You might click on certain things more than others. You might choose to comment on or like things. Those are all signals that help our system decide what to show you. And the goal is really to give you an enriching experience that gives you the content that is most important for you.
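To illustrate the idea of engagement signals feeding a personalization system, here is a toy sketch of how follows, likes, comments, and clicks might be rolled up into per-topic interest scores. This is not Meta's actual ranking system; the signal names, weights, and scoring rule are invented for the example.

```python
from collections import defaultdict

# Toy illustration only -- not Meta's actual ranking system. Signal names,
# weights, and the scoring rule are invented for the example.
SIGNAL_WEIGHTS = {"follow": 3.0, "comment": 2.0, "like": 1.0, "click": 0.5}

def interest_scores(events):
    """events: iterable of (signal_type, topic) tuples from one user."""
    scores = defaultdict(float)
    for signal, topic in events:
        scores[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return dict(scores)

user_events = [
    ("follow", "travel"), ("like", "travel"), ("click", "travel"),
    ("comment", "vegetarian recipes"), ("like", "kids activities"),
]

print(interest_scores(user_events))
# {'travel': 4.5, 'vegetarian recipes': 2.0, 'kids activities': 1.0}
```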
This personalization also informs ad choices. I think a lot of people don't know that they can find out almost everything that Instagram or Facebook knows about them and, if they don't like something, change it. Explain to our listeners how they can find the inventory of what information Meta properties have about them.

Yes, when you see an ad on Instagram, there's a way that you can find out why you saw that ad. Ideally, the ads that you're seeing are really useful, valuable to you. They're things that you would want to buy. I just actually found a Mother's Day present for my wife, which she doesn't know about yet, but will find out in a couple of days because we're recording this right before Mother's Day. So I found that in an Instagram ad. But ideally, you're getting those because they're things that you would actually want. But you can also click on the ad and say, why am I seeing this? And you'll get an explanation of the factors that we considered in thinking that you might want this. And then if you disagree, if we actually got it wrong, you can give us that feedback as well. So the idea is really, rather than all of us having to have the same content, we can each have the experience that we want, and then we can have a say in curating it in the way that is best for us.

This is what we call ad preferences. This system is one of the factors that informs what ads you see. What it's doing is using a combination of the things I'm interested in, the things I explicitly choose to follow, and the things that I engage with, to decide what it's going to surface to me. Part of that is just looking at whether people who like this particular page on Facebook are likely to be interested in these topics. Some of it is general, relating to broad populations. If you go into your ad settings, there's a place where you can see a list of those topics. And then, like I said earlier, this will give you information about how those interests or other things you engaged with informed our choice to show you that particular ad at that moment. By default, it's built to do the heavy lifting for you. The idea is to deliver personalization for everyone in a way that works for them, but then also give them the ability to dig in if they want to.

So that brings me to a question of how things work on the back end. As you have acquired different companies to become part of your suite of services, do my preferences follow me? There's no reason for you to have separate technical teams for each platform, such as Facebook, Instagram, WhatsApp, and Oculus, because they share a lot of the same information. But have you encountered anything that users might be concerned about?

One of the things I think is important is that when we're building this technology, it's increasingly working together. However, it's also important to note that we have a feature called Accounts Center, which allows you to link your accounts together in the way you're describing. If I would rather have my Instagram account be totally separate from my Facebook account, that's absolutely something that I can do. I think from a starting point, giving people the ability to decide how they want to use different products together is an important piece. Most of us have different ways that we engage with the different platforms. So one of the things that we think about is how do we segregate that data and make sure that it's used to deliver the services that you're actually looking for, and that the information is being used in ways that you expect and want. We try to be really transparent about that and make sure people know and have choices about it. Actually, one of the primary focuses of our privacy engineering team is building back-end technologies to ensure that data is used in the intended manner and not misused in other ways.

Switching topics, because I know you just had LlamaCon, and AI is everyone's favorite topic right now. What's going on with Llama?

I was just in California for LlamaCon, which is our first Llama developers' conference. And it was really great to be together with developers from around the world who are using Llama to build incredible things. One of the things that we try to do is to both deploy technology and then provide support to the ecosystem to help this technology (in this case open-source AI) be valuable and create value for people around the world.
One of the grants that we gave out through our Llama Impact Program was for a developer that's using Llama to help people get access to government services and understand how to navigate the various programs that are available to them that they might not know how to access. In India, there's a developer that's using Llama to deliver personalized language and literacy instruction. In a country where you might not have the scale for every kid to have a teacher who can give them a personalized experience, being able to do that on WhatsApp from your phone is actually really powerful.

One of the things that I think is particularly important about the idea of open-sourcing is, if you think about a company like ours, we're not education experts, to use the example that I just gave. These are the kinds of things that we would never have the expertise to build ourselves, but by deploying this technology and open-sourcing it, we can actually enable the ecosystem to build on top of it and do all these really neat things.

Future-casting here: anything that you know that I don't know that we should talk about?

I think the thing that I took away from LlamaCon more than anything else is how broad and diverse the uses of this technology are. There were lots of different developers there, but the small developers who were doing really unique, really interesting things stuck out to me. We had poster presentations where different developers could demonstrate what they were doing. And one of the developers that I ran into was building American Sign Language on Llama using WhatsApp. What it meant was you would have the ability to type something in and it would demonstrate how to say that in American Sign Language, but then you could also use your camera to record somebody signing to you and it would translate that into written text. This was a small developer that didn't have a lot of resources, but was bootstrapped by being able to build on top of Llama. It's really incredible to think about what we're able to do.

When I look toward the future, one of the big changes this new technology brings is that I think it's going to give us a lot more choice. I think this technology is going to give each of us the ability to not have to rely on a developer to build technology exactly for our use case, but to be able to just tell the computer what we want. I also think there are benefits of having that technology integrated into your life, like you and me being able to use wearable technology to talk to each other, even if we're not in the same place. I do think the big challenge, though, is that getting this right is going to require us to challenge the orthodoxy of our instinctive answers to a lot of these questions.

Learn more: Agents, Access, and Advantage: Lessons from Meta's LlamaCon | Rebuilding the Transatlantic Tech Alliance: Why Innovation, Not Regulation, Should Guide the Way | My AI Advisers: Lessons from a Year of Expert Digital Assistants | Why Meta's Change in Fact-Checking Is Good for Democracy

The post How Meta Thinks About Personalization and Privacy appeared first on American Enterprise Institute - AEI.
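For readers curious what "building on Llama" can look like in practice, here is a minimal sketch that uses the open-source Hugging Face transformers library to generate text from an instruction-tuned Llama model. The model identifier and prompt are illustrative assumptions; Llama weights are gated and require accepting Meta's license terms, and this does not reflect the specific applications described in the interview.

```python
# Minimal sketch of building on an open-weights Llama model via the
# Hugging Face transformers library. Model ID and prompt are illustrative;
# downloading the weights requires accepting Meta's license on Hugging Face.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed/illustrative model ID
)

prompt = "Explain, in plain language, how someone might apply for a local library card."
result = generator(prompt, max_new_tokens=150, do_sample=False)
print(result[0]["generated_text"])
```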

    Special offers on Mahindra XUV 700

Reading Time: 2 minutes

The MY25 Mahindra XUV700 is a versatile SUV designed to accommodate various family sizes and lifestyles. With seating configurations for up to seven passengers, it caters equally well to couples seeking countryside adventures in Australia's great outdoors or larger families needing to travel together across the city.

For couples, the MY25 XUV700 AX7L model offers a comfortable and stylish ride, featuring dual 10.25-inch screens, wireless Apple CarPlay and Android Auto connectivity, and a panoramic sunroof. Its 2.0-litre turbocharged engine delivers 149kW and 380Nm, ensuring a smooth drive on both highways and rural roads. Growing families benefit from the spacious interior, with ample room in the third row and generous boot space when the seats are folded down.

In exciting news, Mahindra Automotive Australia has just announced its End of Financial Year (EOFY) Bonus program, effective immediately across participating dealerships nationwide. From now until June 30, 2025, customers will receive a $3000 Factory Bonus on selected in-stock models, including the MY25 XUV700 AX7 and AX7L.

For those looking for a reliable and well-equipped SUV, the MY25 XUV700 offers exceptional value in a congested market. Mahindra's commitment to quality and innovation was recognised when it was named the 2024 Indian Company of the Year by The Economic Times, highlighting its achievements in the automotive sector.

Mahindra Australia is also proudly marking 20 years of operations in Australia. Today, Mahindra boasts a growing workforce of 40 employees and a strong network of more than 70 dealers across the country.

*This is a sponsored post

Read more: Mahindra set to introduce XUV700 Family SUV in Australia

The post Special offers on Mahindra XUV 700 appeared first on Indian Link.

    Generative AI and Fabricated Judicial Opinions: A Slow Learning Curve for Some Attorneys

On the final day of my civil procedure course, Professor Brian Landsberg offered a piece of advice. At first blush, it seemingly had nothing to do with the myriad federal rules and landmark cases like Pennoyer v. Neff that we'd studied. Yet it's a pearl of wisdom I remember more than 35 years later: Never take square corners roundly.

As anxious first-year law students, we were probably tempted to ask Professor Landsberg whether that maxim would appear on the exam. (Thankfully, no one did.) What the civil rights litigator who apparently got stuck teaching civ pro meant, of course, was don't take shortcuts when it comes to the law, and be sure to scrupulously follow the rules.

I'm reminded of Professor Landsberg's cogent counsel because some attorneys continue to take shortcuts in legal research by using generative artificial intelligence (Gen AI) tools when searching for cases to support their motions and, unfortunately, failing to verify whether those cases are real. By now, all practicing attorneys should know that Gen AI tools sometimes "hallucinate" (a kinder, gentler way of saying fabricate or make up) non-existent opinions. Not taking the time to confirm whether cases spat out by Gen AI tools are genuine is like a law firm partner failing to check the work of a first-year associate who just passed the bar exam.

I first addressed the problem in June 2023, describing how a federal judge in Manhattan had sanctioned two attorneys for including Gen AI-produced fake judicial opinions in a case called Mata v. Avianca, Inc. As US District Judge P. Kevin Castel put it, "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings." Castel added that citing "bogus opinions" not only "wastes time and money in exposing the deception," but also "promotes cynicism about the legal profession and the American judicial system."

In January 2024, I discussed how Michael Cohen, the now-disbarred attorney who formerly worked for Donald Trump, used a Gen AI tool that produced three non-existent opinions that Cohen then passed along to his attorney who, in turn, incorporated them into a legal filing. Although neither Cohen nor his attorney was later sanctioned, US District Judge Jesse Furman in March 2024 called the incident "embarrassing" and wrote that "[g]iven the amount of press and attention that Google Bard and other generative artificial intelligence tools have received, it is surprising that Cohen believed it to be a 'super-charged search engine' rather than a 'generative text service.'"

In July 2024, the American Bar Association (ABA) issued a formal opinion regarding attorneys' use of Gen AI tools. It asserts that:

Because [Gen AI] tools are subject to mistakes, lawyers' uncritical reliance on content created by a [Gen AI] tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties. Therefore, a lawyer's reliance on, or submission of, a [Gen AI] tool's output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation . . .

The opinion goes on to stress that "[a]s a matter of competence . . .
lawyers should review for accuracy all [Gen AI] outputs."

Unfortunately, news broke in February of yet another incident of attorneys stuffing motions with fake cases produced by Gen AI tools. This incident involved a major firm, Morgan & Morgan, which calls itself "America's Largest Injury Law Firm" and says it tries "more cases than any other firm in the country." According to an order to show cause filed on February 6 by US District Judge Kelly Rankin of Wyoming, a motion submitted by Morgan & Morgan and the Goody Law Group in Wadsworth v. Walmart, Inc. cited a whopping nine cases that simply don't exist.

Four days later, the plaintiffs' attorneys jointly responded, acknowledging the cases "were not legitimate" and explaining that "[o]ur internal artificial intelligence platform 'hallucinated' the cases in question while assisting our attorney in drafting the motion." They dubbed it "a cautionary tale for our firm and all firms, as we enter this new age of artificial intelligence."

Unfortunately, that cautionary tale had already occurred more than 18 months earlier in Mata v. Avianca, Inc., noted above. It generated coverage in The New York Times and an article in The National Law Review headlined "A 'Brief' Hallucination by Generative AI Can Land You in Hot Water." The plaintiffs' attorneys in Wadsworth, three of whom were individually sanctioned by Rankin with minimal fines (neither Morgan & Morgan nor the Goody Law Group was sanctioned), seemingly missed the news. Going forward, they should heed the ABA's opinion and Professor Landsberg's advice about not taking shortcuts.

Learn more: Regulating Complex and Uncertain AI Technologies | What a Novel-Writing Organization's Demise Teaches Us About AI | Free Speech Tradeoffs and Role-playing Chatbots: Sacrificing the Rights of Many to Safeguard a Few? | Generative AI: The Emerging Coworker Transforming Teams

The post Generative AI and Fabricated Judicial Opinions: A Slow Learning Curve for Some Attorneys appeared first on American Enterprise Institute - AEI.

    No More Tappers: Get Skin in the Game

Irony of ironies: Outrage around Jake Tapper and Alex Thompson's book, Original Sin, is helping to sell more copies. The failure of a CNN anchor and an Axios reporter to cover President Joe Biden's infirmities in a timely manner now delivers them free advertising for their book and a bigger haul. There is a partial fix for such warped incentives: a media environment where people have skin in the game. But in the United States, the next era of news-gathering and dissemination might be against the law.

The meta-revelation is as important as the revelation. President Biden was suffering declining acuity in office, and the media didn't pick it up or report on it. The Tapper–Thompson book explores how that came to be. It is not lost on the commentariat that the authors are confessing their own failings and the failings of DC's journalists generally. On the left, Jon Stewart issued a characteristic diatribe. Megyn Kelly went after Tapper from the right. We all have reason to lack confidence in our institutions these days. They need some fixing.

Today's more open and decentralized media environment allows for a greater menu of information options, up to and including literal "fake news." We should not only bear the costs. There are things that could help us grow into an improved media environment and reap its benefits.

The incentives for accurate—and intense—research and reporting are clearest in financial markets. Everyone should know the story of Hindenburg Research. The combined media and investment firm investigated public companies, seeking to hasten the fall of the badly run ones while selling their stocks short. Hindenburg's 2021 blog post about Clover Health is an example of their craft and a dive into the firm's philosophy. Early this year, its founder, Nathan Anderson, shuttered Hindenburg. I assume, but don't know, that he simply has enough money now.

What if reporters—or anyone—could make money by bringing true information forward? Particularly with something as momentous as a president's mental health, there is a lot of value to our society from having better information. A system for rewarding people with good information already exists. The leading example is Polymarket, a platform where people can make bets on future events. A recent New York Magazine piece says:

Polymarket is unlike other gambling forums in that it's not a bookie and it doesn't set the odds of its markets. Each market is simply structured as a question on which you can bet "yes" or "no." Will Volodymyr Zelenskyy apologize to Trump? Will Fyre Festival 2 sell out? Will TikTok be banned again before May?

Imagine the opportunity for reporters—and for White House staff, visitors to the Oval Office, and political supporters—if they felt they had better information than others about something so important. They could profit from bringing it forward by putting money on the line about future events. But here's how the New York Magazine piece opens:

At 6 a.m. on Wednesday, November 13, eight FBI agents in black windbreakers burst through the door of Shayne Coplan's Soho apartment with a battering ram, surprising him and his girlfriend in bed. They seized his phone from the bedside table but wouldn't let him touch it, not even to unlock it, lest he destroy evidence that might criminally implicate him or his company, Polymarket, the popular betting platform that over a week before had set off celebrations at Mar-a-Lago when it showed Donald Trump winning the presidential election well before the networks did.
Polymarket is close enough to illegal in the United States that the company endeavors to exclude US users. Last August, eight Democratic members of Congress asked the Commodity Futures Trading Commission to make sure that political prediction markets were illegal. (Tongue in cheek: Did Senator Warren (D-MA) sign on because of dismay with the poor odds Polymarket users gave her crypto legislation a couple of months earlier?)

I do not think prediction markets are gambling. These are games of skill, not chance—and they are not really games, but a new information stew with heavy flavors of journalism and investment. There is a lot to learn about prediction markets and their permutations for good and evil. (The paragon of potential evil usage is the "assassination market.") Tremendous value for society can be unlocked by lining up incentives to discover and publish true information. There is no better illustration of the need than the case of President Biden's declining acuity, hidden in plain sight of what should be the world's best and most dogged journalists.

I send my best wishes to Joe Biden with respect to his recently reported health issues.

Learn more: "Misinformation" Is Condescending: Do Better, Elites | DOGE, Open Up the MAX Database! | Haste Controls Waste! A Theory of Reform | Brilliant Ideas on the Cutting Room Floor

The post No More Tappers: Get Skin in the Game appeared first on American Enterprise Institute - AEI.
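As a toy illustration of how a prediction market rewards accurate information: in the binary "yes/no" markets described in the New York Magazine excerpt, a YES share that pays $1 if the event occurs trades at a price that can be read as an implied probability, so anyone with genuinely better information has a positive expected profit from trading on it. The numbers below are invented, and real platform mechanics (order books, fees, resolution rules) are ignored.

```python
# Toy model of a binary "yes/no" market: a YES share pays $1 if the event
# occurs and $0 otherwise, so its price reads as an implied probability.
# Numbers are invented; fees, spreads, and order books are ignored.
def expected_profit_per_share(price_yes: float, my_probability: float) -> float:
    """Expected value of buying one YES share at price_yes, given my belief."""
    return my_probability * 1.0 - price_yes

market_price = 0.40   # market implies a 40% chance
my_estimate = 0.65    # a better-informed reporter's estimate

edge = expected_profit_per_share(market_price, my_estimate)
print(f"implied probability: {market_price:.0%}, my estimate: {my_estimate:.0%}")
print(f"expected profit per $0.40 YES share: ${edge:.2f}")   # $0.25
```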

    Regulating Complex and Uncertain AI Technologies

A common cognitive bias, in which decision-makers unconsciously substitute a complex problem with a simpler, related one, was first described in 2002 by Daniel Kahneman and Shane Frederick. The concept of attribute substitution explains that, when faced with a complex judgment (the target attribute), some may replace it with a more accessible, simpler judgment (the heuristic attribute) without realizing it—inadvertently responding only to the simpler problem. This process is automatic and often goes undetected by the individual, leading to systematic cognitive biases.

Policymakers' tendency to regulate new technologies based on their experience with related legacy systems has become similarly evident in recent history. New broadband technologies operating across multiple infrastructures have been regulated as if they were legacy telephony systems entrenched in infrastructure monopolies—the ill-fated unbundling and access regulation rules that would have worked well in telephony systems have, in broadband, significantly deterred investment in rival fiber and cable infrastructures in the European Union. The United States avoided this fate only because real competition from unregulated cable operators already existed; the subsequent decision not to bind information services with the telephony regulatory legacy became the solution.

Going back a little further and considering regulations designed to keep people "safe" from the dangers of a new technology, we find the regulatory response to the emergence of a new general purpose technology—the locomotive (self-powered vehicle). The pioneering UK Locomotives Act 1865 required that a person, while any Locomotive is in motion, shall precede such Locomotive on Foot by not less than Sixty Yards, and shall carry a Red Flag constantly displayed, and shall warn the Riders and Drivers of Horses of the Approach of such Locomotives, and shall signal the Driver thereof when it shall be necessary to stop, and shall assist Horses, and Carriages drawn by Horses, passing the same.

The effect was that the locomotive (and subsequently, horseless carriages—or cars) could go no faster than a person could walk. The Act was repealed in 1896, some considerable time after steam- and internal combustion engine-powered vehicles were on the road. Vermont passed a similar red flag law in 1894, but it was repealed just two years later.

Problems with these laws arose because horses and carriages had existing road use rights that regulators could not violate—including going fast. But the new locomotive technologies could be regulated, so rules were implemented that the regulators would have liked to impose on horse-drawn carriages—namely, limiting their speed to mitigate the costs of fast-travelling horses getting out of the control of the driver and harming pedestrians.

However, the red flag laws failed on three counts. First, they prevented society from benefiting from the features of the new technologies—in this case, faster travel. Second, they were regulating an issue that was far less likely to occur with a locomotive than with a horse and carriage; unlike a horse, which has a mind of its own, the locomotive posed a much lower probability of escaping the driver's control. Third, the regulations shaped the perceptions and behaviors of road users in regard to the new technology. Their understanding of the locomotive was developed under the tightly regulated oversight of the man with the flag.
When the rules were repealed, road users had no idea that the locomotive could go fast, and they did not take evasive action quickly enough. Many more people were harmed because the laws had disincentivized necessary learning.

Likewise, a real risk exists in the rush to regulate new artificial intelligence (AI)—including generative pre-trained transformers (AI GPTs) such as ChatGPT and Llama. Decision-makers are substituting their understanding of constraining Good Old-Fashioned AI (GOFAI) big data tools—which respond well to risk management aligned with advancing engineering precision in computing—for the understanding necessary to govern applications in the face of the uncertainty and near-infinite variety invoked by the AI GPTs. The complex intertwining of these AI GPTs, where even application developers and the AIs themselves cannot explain how or why they reach certain outcomes, with equally complex human commercial and social systems is unprecedented.

Regulating AI GPTs as if they were GOFAI invokes all three risks of the red flag laws: New benefits from novel technologies will be lost, the potential harms are not necessarily the same, and people's behavior will change in the presence of the new rules. The new world created with AI GPTs is certainly uncertain, so regulation should be guided by knowledge of that complexity and uncertainty—resisting the rush to regulate and, instead, learning through experience of the new rather than past fears of the old.

Learn more: China's AI Strategy: Adoption Over AGI | How Much Might AI Legislation Cost in the US? | The Best AI Law May Be One That Already Exists | AI's Emerging Paradox

The post Regulating Complex and Uncertain AI Technologies appeared first on American Enterprise Institute - AEI.
